Model fit

Assumption checks

Indices of model fit

| Metric     | Value   |
|------------|---------|
| AIC        | 1755.67 |
| AICc       | 1756.16 |
| BIC        | 1774.83 |
| R2 (cond.) | 0.79    |
| R2 (marg.) | 0.29    |
| ICC        | 0.71    |
| RMSE       | 23.50   |
| Sigma      | 25.59   |

For interpretation of performance metrics, please refer to this documentation.

Parameter estimates

Plot

Tabular summary

# Fixed Effects

| Parameter   | Coefficient | SE   | 95% CI           | t(174) | p      |
|-------------|-------------|------|------------------|--------|--------|
| (Intercept) | 251.41      | 6.63 | (238.32, 264.49) | 37.91  | < .001 |
| Days        | 10.47       | 1.50 | (7.50, 13.43)    | 6.97   | < .001 |

# Random Effects

| Parameter                     | Coefficient | SE   | 95% CI         |
|-------------------------------|-------------|------|----------------|
| SD (Intercept: Subject)       | 23.78       | 5.58 | (15.02, 37.66) |
| SD (Days: Subject)            | 5.72        | 1.19 | (3.81, 8.59)   |
| Cor (Intercept~Days: Subject) | 0.08        | 0.32 | (-0.49, 0.61)  |
| SD (Residual)                 | 25.59       | 1.51 | (22.80, 28.73) |

To find out more about table summary options, please refer to this documentation.

Predicted Values

Plot

Error in match.arg(tolower(range), c("range", "iqr", "ci", "hdi", "eti", : 'arg' should be one of "range", "iqr", "ci", "hdi", "eti", "sd", "mad"
Error in lapply(text_modelbased, function(i) {: object 'text_modelbased' not found
Error in is.ggplot(x): object 'all_plots' not found

Tabular summary

Error in eval(expr, envir, enclos): object 'text_modelbased' not found

Text reports

Textual summary

We fitted a linear mixed model (estimated using ML and nloptwrap optimizer) to predict Reaction with Days (formula: Reaction ~ Days). The model included Days as random effects (formula: ~Days | Subject). The model’s total explanatory power is substantial (conditional R2 = 0.79), and the part related to the fixed effects alone (marginal R2) is 0.29. The model’s intercept, corresponding to Days = 0, is at 251.41 (95% CI (238.32, 264.49), t(174) = 37.91, p < .001). Within this model:

  • The effect of Days is statistically significant and positive (beta = 10.47, 95% CI (7.50, 13.43), t(174) = 6.97, p < .001; Std. beta = 0.54, 95% CI (0.38, 0.69))

Standardized parameters were obtained by fitting the model on a standardized version of the dataset. 95% Confidence Intervals (CIs) and p-values were computed using a Wald t-distribution approximation.

The model’s total explanatory power is substantial (conditional R2 = 0.79), and the part related to the fixed effects alone (marginal R2) is 0.29.
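The formula and coefficients in this summary match the classic sleep-deprivation example that ships with `{lme4}` (`sleepstudy`: reaction time over days of deprivation); treating that correspondence as an assumption, a model producing this kind of report could be fitted as:

```r
# Assumption: the summary above corresponds to the lme4 `sleepstudy` example
# (the variable names Reaction, Days, Subject suggest it).
library(lme4)

# Random intercept and random slope for Days, varying by Subject
model <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# {report} produces the kind of textual summary shown above
if (requireNamespace("report", quietly = TRUE)) {
  print(report::report(model))
}
```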

Model information

---
title: "Regression model summary from `{easystats}`"
output: 
  flexdashboard::flex_dashboard:
    theme:
      version: 4
      # bg: "#101010"
      # fg: "#FDF7F7" 
      primary: "#0054AD"
      base_font:
        google: Prompt
      code_font:
        google: JetBrains Mono
params:
  model: model
  check_model_args: check_model_args
  parameters_args: parameters_args
  performance_args: performance_args
---

```{r setup, include=FALSE}
library(flexdashboard)
library(easystats)

# Since not all regression models are supported across all packages, make the
# dashboard chunks more fault-tolerant. E.g. a model might be supported in
# `{parameters}`, but not in `{report}`.
#
# For this reason, `error = TRUE`
knitr::opts_chunk$set(
  error = TRUE,
  out.width = "100%"
)
```

```{r}
# Get user-specified model data
model <- params$model

# Is it supported by `{easystats}`? Skip evaluation of the following chunks if not.
is_supported <- insight::is_model_supported(model)

if (!is_supported) {
  unsupported_message <- sprintf(
    "Unfortunately, objects of class '%s' are not yet supported in {easystats}.\n
    For a list of supported models, see `insight::supported_models()`.",
    class(model)[1]
  )
}
```
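As a usage sketch (the file name `dashboard.Rmd` is a placeholder, not part of the template), this parameterized document would be rendered from an R session along these lines:

```r
# Hypothetical invocation of this parameterized template; "dashboard.Rmd"
# stands in for wherever the template file is saved.
library(lme4)

model <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

dashboard_params <- list(
  model = model,
  check_model_args = list(),  # forwarded to performance::check_model()
  parameters_args = list(),   # forwarded to parameters::parameters()
  performance_args = list()   # forwarded to performance::performance()
)

# Rendering requires pandoc and the template file; guard so this sketch
# is safe to source as-is
if (file.exists("dashboard.Rmd")) {
  rmarkdown::render("dashboard.Rmd", params = dashboard_params)
}
```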


Model fit 
=====================================  

Column {data-width=700}
-----------------------------------------------------------------------

### Assumption checks

```{r check-model, eval=is_supported, fig.height=10, fig.width=10}
check_model_args <- c(list(model), params$check_model_args)
do.call(performance::check_model, check_model_args)
```

```{r, eval=!is_supported}
cat(unsupported_message)
```

Column {data-width=300}
-----------------------------------------------------------------------

### Indices of model fit

```{r, eval=is_supported}
# `{performance}`
performance_args <- c(list(model), params$performance_args)
table_performance <- do.call(performance::performance, performance_args)
print_md(table_performance, layout = "vertical", caption = NULL)
```


```{r, eval=!is_supported}
cat(unsupported_message)
```

For interpretation of performance metrics, please refer to <a href="https://easystats.github.io/performance/reference/model_performance.html" target="_blank">this documentation</a>.
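The combined table produced by `performance::performance()` can also be taken apart metric by metric; a sketch for a mixed model like the one on this page (assuming `{lme4}` is installed):

```r
library(lme4)
library(performance)

model <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

r2(model)    # Nakagawa's conditional and marginal R2
icc(model)   # intraclass correlation: variance explained by the Subject grouping
rmse(model)  # root mean squared error of the model's predictions
```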

Parameter estimates
=====================================  

Column {data-width=550}
-----------------------------------------------------------------------

### Plot

```{r dot-whisker, eval=is_supported}
# `{parameters}`
parameters_args <- c(list(model), params$parameters_args)
table_parameters <- do.call(parameters::parameters, parameters_args)

plot(table_parameters)
```


```{r, eval=!is_supported}
cat(unsupported_message)
```

Column {data-width=450}
-----------------------------------------------------------------------

### Tabular summary

```{r, eval=is_supported}
print_md(table_parameters, caption = NULL)
```


```{r, eval=!is_supported}
cat(unsupported_message)
```

To find out more about table summary options, please refer to <a href="https://easystats.github.io/parameters/reference/model_parameters.html" target="_blank">this documentation</a>.


Predicted Values
=====================================  

Column {data-width=600}
-----------------------------------------------------------------------

### Plot

```{r expected-values, eval=is_supported, fig.height=10, fig.width=10}
# `{modelbased}`
int_terms <- find_interactions(model, component = "conditional", flatten = TRUE)
con_terms <- find_variables(model)$conditional

if (is.null(int_terms)) {
  model_terms <- con_terms
} else {
  model_terms <- clean_names(int_terms)
  int_terms <- unique(unlist(strsplit(clean_names(int_terms), ":", fixed = TRUE)))
  model_terms <- c(model_terms, setdiff(con_terms, int_terms))
}

text_modelbased <- lapply(unique(model_terms), function(i) {
  # use the full observed range of the focal term when building the grid
  grid <- get_datagrid(model, at = i, range = "range", preserve_range = FALSE)
  estimate_expectation(model, data = grid)
})

ggplot2::theme_set(theme_modern())
all_plots <- lapply(text_modelbased, function(i) {
  out <- visualisation_recipe(i, show_data = "none")
  plot(out) + ggplot2::ggtitle("")
})

see::plots(all_plots, n_columns = round(sqrt(length(text_modelbased))))
```


```{r, eval=!is_supported}
cat(unsupported_message)
```

Column {data-width=400}
-----------------------------------------------------------------------

### Tabular summary

```{r, eval=is_supported, results="asis"}
for (i in text_modelbased) {
  tmp <- print_md(i)
  tmp <- gsub("Variable predicted", "\nVariable predicted", tmp)
  tmp <- gsub("Predictors modulated", "\nPredictors modulated", tmp)
  tmp <- gsub("Predictors controlled", "\nPredictors controlled", tmp)
  # `gsub()` returns a plain character vector, so emit the markdown with
  # `cat()` for the `results = "asis"` chunk
  cat(tmp, sep = "\n")
  cat("\n\n")
}
```


```{r, eval=!is_supported}
cat(unsupported_message)
```


Text reports
=====================================    

Column {data-width=500}
-----------------------------------------------------------------------

### Textual summary

```{r, eval=is_supported, results='asis', collapse=TRUE}
# `{report}`
text_report <- report(model)
text_report_performance <- report_performance(model)

text_report <- gsub("]", ")", gsub("[", "(", text_report, fixed = TRUE), fixed = TRUE)
text_report_performance <- gsub("]", ")", gsub("[", "(", text_report_performance, fixed = TRUE), fixed = TRUE)

# With `results = 'asis'`, emit the plain text directly
cat(text_report)
cat("\n\n")
cat(text_report_performance)
```


```{r, eval=!is_supported}
cat(unsupported_message)
```

Column {data-width=500}
-----------------------------------------------------------------------

### Model information

```{r, eval=is_supported}
model_info_data <- insight::model_info(model)

model_info_data <- datawizard::data_to_long(as.data.frame(model_info_data))

DT::datatable(model_info_data)
```

```{r, eval=!is_supported}
cat(unsupported_message)
```